I Built a Cryptographic Identity Layer for AI Agents. Here's Why.

Written by Sagnik Roy Β· April 2026

A few months ago I was reading about how Perplexity got caught scraping websites by disguising their crawler as a regular Chrome browser. No identity, no declaration of intent, just a bot pretending to be a human. The websites had no way to tell the difference.

That bothered me. Not because scraping is inherently bad β€” automation is useful β€” but because there was no mechanism for a bot to say "hey, I'm a bot, here's who I am, here's what I'm doing." The web had no passport system for automated agents. And with AI agents about to flood the internet, that gap was going to get a lot more painful.

So I started building one.

1. The Problem

Right now, when any AI agent hits your API, you see a request. That's it. You don't know if it's a legitimate automation from a paying customer, a scraper, a bot farm, or someone testing your system. They all look identical at the HTTP level.

Your options are basically heuristics: block IP ranges, filter on User-Agent strings, throw up CAPTCHAs, or rate limit everything indiscriminately. None of these actually solves the problem. They're all guessing. What if instead of guessing, you could verify?

"Move from guessing intent to verifying declared intent."

That's the idea behind Supelock. Every agent gets a cryptographic identity. Every request it makes is signed with that identity and declares what it intends to do. The server verifies the signature and decides what access to give β€” based on proof, not heuristics.

2. How it works

I built the system in four parts. Each one handles a specific layer of the problem.

SDK (agent signs requests) → Registry (stores public keys) → Middleware (verifies + enforces) → Dashboard (monitors + controls)
The SDK runs on the agent side. When an agent makes an HTTP request, the SDK builds a payload containing the actor ID, method, path, declared intent, a nonce, and an expiry timestamp. It signs this payload with an Ed25519 private key that never leaves the machine. The signed token is attached as an HTTP header.

from supelock.actor import Actor

actor = Actor("my-agent", registry_url="http://registry:8001")
actor.register()

response = actor.request(
    method="GET",
    url="https://api.example.com/data",
    intent={"action": "read_data"},
)

The request now carries two extra headers: X-Supelock-Actor and X-Supelock-Intent. The server can verify both.
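Under the hood, the signing step presumably looks something like the following. This is a minimal sketch using the `cryptography` package; the field names, canonical-JSON encoding, and token layout are my assumptions for illustration, not Supelock's actual wire format:

```python
import base64
import json
import secrets
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key is loaded from disk and never leaves the machine.
private_key = Ed25519PrivateKey.generate()

# Hypothetical payload shape -- field names are illustrative.
payload = {
    "actor_id": "my-agent",
    "method": "GET",
    "path": "/data",
    "intent": {"action": "read_data"},
    "nonce": secrets.token_hex(16),       # unique per request, blocks replays
    "expires_at": int(time.time()) + 60,  # short-lived token
}

# Canonical JSON so signer and verifier operate on identical bytes.
message = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
signature = private_key.sign(message)

# One way the header value could be assembled: payload.signature, base64url.
token = (
    base64.urlsafe_b64encode(message).decode()
    + "."
    + base64.urlsafe_b64encode(signature).decode()
)
```

The key property is that the server needs only the public key to check the signature; the private key stays with the agent.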

The Registry is a small FastAPI service that maps actor IDs to public keys. When an agent registers for the first time, it submits its public key. When a server needs to verify a request, it looks up the key here. It supports revocation β€” if an agent is compromised, one DELETE call and every future request from that actor is rejected immediately.

The Middleware drops into any FastAPI application. On every incoming request it checks for Supelock headers, fetches the public key from the Registry, verifies the Ed25519 signature, checks expiry, prevents replay attacks, and validates that the signed method and path match the actual request. Then it runs a policy engine.
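Those verification steps can be condensed into one function. This is my own sketch, not the middleware's actual code; it assumes a JSON payload carrying method, path, nonce, and expiry, signed with Ed25519 via the `cryptography` package:

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

seen_nonces: set[str] = set()  # a real deployment would evict expired nonces

def verify(message: bytes, signature: bytes, public_key: Ed25519PublicKey,
           actual_method: str, actual_path: str) -> bool:
    # 1. Ed25519 signature check against the key fetched from the Registry.
    try:
        public_key.verify(signature, message)
    except InvalidSignature:
        return False
    claims = json.loads(message)
    # 2. Expiry check.
    if claims["expires_at"] < time.time():
        return False
    # 3. Replay prevention: each nonce is accepted exactly once.
    if claims["nonce"] in seen_nonces:
        return False
    seen_nonces.add(claims["nonce"])
    # 4. Signed method/path must match what the request actually does.
    return claims["method"] == actual_method and claims["path"] == actual_path
```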

The Dashboard is a Next.js application that shows every request hitting the middleware in real time β€” color coded by trust level β€” and lets you edit the policy config from a browser UI without restarting anything.

3. The policy engine

Verification alone isn't enough. Knowing who an agent is doesn't tell you what they should be allowed to do. That's what the policy engine handles.

A site owner drops a supelock.yaml file in their project:

default_policy: anonymous

policies:
  verified_agent:
    rate_limit: "1000/minute"
    allowed_intents: ["read_data", "create_order"]
    allowed_paths: ["*"]
    trust_required: high
  anonymous:
    rate_limit: "10/minute"
    allowed_intents: ["*"]
    allowed_paths: ["*"]
    trust_required: low

actors:
  "my-agent": verified_agent

That's it. Verified agents get 1000 requests per minute. Anonymous traffic gets 10. The policy engine enforces four checks in order: trust gate, path rules, intent allow-list, and sliding window rate limit. First failure short-circuits and returns the appropriate HTTP error. No code changes needed.

A few numbers: verified agents get a 1000× higher rate limit than anonymous traffic; signatures use Ed25519, the same curve behind SSH, Signal, and Tor; and integrating into an existing FastAPI app takes about 10 minutes.

4. What I actually built

Four GitHub repos, all working together: the SDK, the Registry, the Middleware, and the Dashboard.

The policy editor was one of the parts I'm most happy with. You can add policies, change rate limits, assign actors to tiers, and hit Save β€” it writes the YAML file and hot-reloads the engine in under a second. No restart. Changes take effect on the very next request.
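Hot reload without a restart is usually just a modification-time check on each read. A stdlib-only sketch of that pattern (the parser is pluggable; Supelock's actual reload mechanism may differ):

```python
import os
from typing import Any, Callable

class HotConfig:
    """Re-parses a config file only when its modification time changes."""

    def __init__(self, path: str, parse: Callable[[str], Any]) -> None:
        self.path = path
        self.parse = parse
        self._mtime: float | None = None
        self._value: Any = None

    def get(self) -> Any:
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:  # file changed, or first read
            with open(self.path) as f:
                self._value = self.parse(f.read())
            self._mtime = mtime
        return self._value
```

Checking mtime costs one stat call per request, which is how a save in a dashboard can take effect on the very next request with no restart.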

5. The honest state of things

I want to be straight about where this is. Supelock is a proof of concept for a protocol, not a finished product. The cryptographic loop works. The policy engine works. The dashboard works. You can run the full stack locally in about 10 minutes.

But it has a two-sided adoption problem. For Supelock to be useful, both sides need to participate β€” agents need the SDK installed, and APIs need the middleware installed. Neither side has a strong reason to go first until the other side is already there.

The way I see this getting traction is one of three paths:

None of that has happened yet. But the underlying problem is real and getting more acute as AI agents become a bigger part of how software interacts with the web. The timing feels right to be building the infrastructure before everyone else builds their own incompatible version.

6. What's next

The immediate open items I'm thinking about:

If any of this sounds interesting to you β€” whether you're building agents, running APIs, or just think the problem is worth solving β€” the code is all open source. Issues, PRs, and opinions are welcome.